A Round-up of NVIDIA GTC 2022
Processors, Avatars, the Omniverse & Autonomous Driving

Source: Auto Futures - "Processors, Avatars, the Omniverse & Autonomous Driving – a Round-up of NVIDIA GTC 2022" by Lynn Walford.


 Automotive technology was one of the big draws at NVIDIA’s twice-yearly GTC conference. The brightest minds and experts in the field talked about how NVIDIA technology is being used in autos for autonomous driving, conversational AI, luxury user experiences, and animation.


AI in Automotive with Jensen Huang & Toy Jensen

Introduced by sweeping AI-composed orchestral music and poetry expounding on what AI can do, Jensen Huang, CEO and founder of NVIDIA, spoke from a black virtual stage where images, videos, and avatars illustrated his presentation.

He started with the technology, platforms, and systems NVIDIA offers across many industries before introducing mind-boggling supercomputers and processors capable of petaflops of computing performance.

In the automotive field, he announced new partners, chips, SDKs and use cases. The company’s SDKs, or software development kits, are at the heart of accelerated computing.

With each new SDK, new science, new applications, and new industries can tap into the power of NVIDIA computing. According to Huang, these SDKs tackle the immense complexity at the intersection of computing algorithms and science.

“Future cars will be highly programmable, evolving from many embedded controllers to highly centralised computers,” Huang explained.

“The AI and AV [autonomous vehicle] functionalities will be delivered in software and enhanced for the life of the car. NVIDIA Orin has been enormously successful with companies building this future. Orin is the ideal centralised AV and AI computer, and is the engine of new-generation EVs, robotaxis, shuttles, and trucks.”

[Image: The NVIDIA DRIVE Orin chip]

AI-Based Crowdsourced Mapping

Huang announced that Chinese EV manufacturer BYD will be using NVIDIA DRIVE Hyperion 8 architecture with Orin on its vehicles for intelligent driving and parking. This platform, based on DRIVE Orin, is now in production.

A follow-up version, DRIVE Hyperion 9, for automated and autonomous vehicles, will arrive in 2026 production vehicles. It will be built on multiple DRIVE Atlan computers and will feature the computer architecture, sensor set, and full NVIDIA DRIVE Chauffeur and Concierge applications.

The DRIVE Hyperion 9 architecture includes 14 cameras, nine radars, three lidars, and 20 ultrasonic sensors for automated and autonomous driving, as well as three cameras and one radar for interior occupant sensing.
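
For reference, those counts fold into a simple configuration summary. The sketch below is purely illustrative; the class and field names are hypothetical and not part of any NVIDIA SDK:

```python
# Illustrative tally of the announced DRIVE Hyperion 9 sensor set.
# The structure and names are hypothetical, not an NVIDIA API.
from dataclasses import dataclass

@dataclass(frozen=True)
class SensorSuite:
    cameras: int
    radars: int
    lidars: int
    ultrasonics: int

    def total(self) -> int:
        return self.cameras + self.radars + self.lidars + self.ultrasonics

EXTERIOR = SensorSuite(cameras=14, radars=9, lidars=3, ultrasonics=20)  # driving
INTERIOR = SensorSuite(cameras=3, radars=1, lidars=0, ultrasonics=0)    # occupant sensing

print(EXTERIOR.total(), "exterior sensors,", INTERIOR.total(), "interior sensors")
```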

Other companies also deploying the DRIVE Hyperion ecosystem are DeepRoute, Pegasus, UPower, and WeRide. Luxury EV maker Lucid Motors’ automated driving system, DreamDrive Pro, is also built on NVIDIA DRIVE Hyperion. 

Huang also revealed NVIDIA DRIVE Map, a multimodal mapping platform. It promises to combine the accuracy of DeepMap survey mapping with the scale of AI-based crowdsourced mapping.

Huang said DRIVE Map will provide mapping coverage of half a million kilometres of roadway in North America, Europe, and Asia by 2024.

[Image: NVIDIA DRIVE Map at night]

He also introduced new AI-based tools for the NVIDIA DRIVE Sim that reconstruct and modify driving scenarios.

NVIDIA’s Omniverse tech allowed Huang to talk to a ‘Toy Jensen’ digital twin version of himself.

Huang said, “Let me welcome back Toy Jensen, an application we built with Omniverse Avatar. Remember, Toy Jensen is not recorded. He’s completely real-time, making eye contact, making conversation and fully animated.”


NVIDIA Covers it All – AI Conversations Understand Who You Are & More

Before the keynote, there was a ‘connect with experts’ session with NVIDIA staff to learn how the company’s tech can be used for autonomous driving and new automotive use cases.

“We’re going through a transformation in automotive. NVIDIA has the hardware and the software stack to support many different use cases,” said Dean Harris, Global Business Development for Enterprise and Edge AI and HPC, NVIDIA.

These use cases included conversational AI in the data centre for different types of customer service applications, goodwill dynamic offers, recommendations, predictive maintenance, fraud detection, dynamic pricing for rideshare, vehicle visual inspection, and financial services.

“Voice interactions while driving will be a really fun journey. In the beginning, I think you’re going to see voice interactions for distracted and drowsy drivers prompting them to pay attention or when the human has to take over. I think that’s where we’ll see it start,” said Norm Marks, Global Head, Automotive Enterprise, NVIDIA.

“But where I see it going is in an always-watching and always-listening world – we’ll see voice interactions that are a combination of both conversational AI and recommendations.”

Marks gave an example in which two people are sitting in the front seats of a car, talking about where they should go for lunch. The two passengers in the back, meanwhile, are talking about the baseball game. The AI will know the context and who is talking.

The assistant will make recommendations for what is being talked about. It will suggest a restaurant, and then be able to make a reservation for the two passengers in the front and give directions.

It would also be able to buy tickets to the game for the passengers in the back seat.
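
In other words, the assistant has to pair speaker identification with seat-zone context before acting on a request. A minimal sketch of that routing idea, with entirely hypothetical names (this is not NVIDIA's conversational AI stack):

```python
# Hypothetical in-cabin routing: keep a separate conversation context
# per seat zone so front and rear requests don't get mixed up.
from collections import defaultdict

def route_utterances(utterances):
    """Group (zone, speaker, text) tuples into per-zone contexts."""
    contexts = defaultdict(list)
    for zone, speaker, text in utterances:
        contexts[zone].append((speaker, text))
    return contexts

cabin = [
    ("front", "driver", "Where should we go for lunch?"),
    ("rear", "passenger_3", "Did you catch the game last night?"),
    ("front", "passenger_1", "Somewhere with outdoor seating."),
]

for zone, turns in route_utterances(cabin).items():
    print(zone, "->", turns)  # each zone gets its own recommendation thread
```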

[Image: NVIDIA’s DRIVE Hyperion 9]

“You’ll want to be in a self-driving vehicle of a brand that knows you.”

“Your AI chauffeur will be your friendly driving companion with NVIDIA DRIVE,” said Huang while introducing a video showing a Mercedes-Benz saloon driving autonomously on NVIDIA technology.

During the video, the driver, Daniel, tells a white puffy avatar in a split eggshell to start driving and to let Hubert know that he is coming. The avatar then sends a text message to Hubert.

When near the destination, the avatar says, “I see Hubert.”

Hubert then gets in the car and asks the avatar to take him to a hotel. Experts expect that, in the future, automakers may also add avatars of their CEOs to join the conversation.

“I know different automotive companies are taking that exact use case of a CEO Avatar and applying it to their executives, for example, or people within their own companies. They are developing these avatars for either customer support or a connected dealer-type experience,” explained NVIDIA’s Harris.

“Maybe one day you’ll have an Omniverse avatar where you get to pick who is having the conversation with you. That could be the CEO of the car company,” added Marks.

“You might want to pick your favourite comedian, actor or actress, or someone else for your Avatar. I think it’s going to be a really exciting space down the road and fully applicable for in-vehicle interactions.”

It isn’t just CEO avatars and smart assistants, however. NVIDIA also supports autonomous driving across multiple modes, both in vehicles and in simulation.

“There’s no one prescription for self-driving. So, your journey will be phased. It will take years to be able to do the work and it will take a significant amount of scale,” explained Marks.

The total experience and interaction with the vehicle will be part of the branding and loyalty experience in the future of autonomous vehicles.

“I think the future of loyalty in autonomous vehicles will start with things such as an AI heads-up display so you feel good about how well the AI is working,” continued Marks.

“But eventually, I think just like Netflix has transformed what we watch, this type of interaction will transform loyalty to the brand. You’ll want to be in a self-driving vehicle of a brand that knows you, that listens to you and that can make recommendations.”


Why Automakers Want to Make Their Own Software

In another session, ‘Creating Software for the Most Desirable Cars,’ Magnus Östberg, Chief Software Officer, Mercedes-Benz RDNA, explained the company’s software strategy for creating digital luxury.

It is not necessarily the software alone that is becoming the success factor in the automotive industry, said Östberg.

Instead, it is the fusion, the integration of how the software uses the hardware. The company is planning to release MB.OS, its new vehicle operating system, in 2024.

“Mercedes-Benz created a radically new software-driven approach to the UI,” noted Östberg.

“For the first time, we’ve used a game engine to elevate the UI to the next level of digital luxury. In doing so, we partnered with Unity Technologies, whose products have been the basis for at least half of all mobile games and augmented reality experiences since 2018.”

The UI experience demonstrates how the real-time graphics enable new digital worlds in the vehicle that instantly respond to the driver’s needs and bring the virtual world into the car.

“It catapults us into a highly responsive, intelligent, and software-driven future. The overall experience is brought to life by our mysterious dark cloud avatar. She is based on a likeness of our namesake, Mercédès Jellinek [for whom the car company was named]. She shapeshifts in response to the driver’s needs and turns the journey into a luxury experience by taking care of everyone,” explained Östberg.

He added that the system manages information to make sure the driver has everything they need, when they need it. The avatar sounds amazingly real, with emotional expression, and places the conversation between driver and car on a whole new level that is more natural, intuitive, and even empathic.

“By owning the user interface, we can ensure the true luxury customer experience that only Mercedes-Benz can offer by reducing complexity,” he added.

“With proper handling of data, we can optimise customer-centric features and revenues and provide the most seamless, intuitive user experience. We are constantly learning from this data and improving our services. Control of data comes with great responsibility, and it’s one more reason why we’d rather develop a proprietary OS than outsource it. If we were to relinquish data sovereignty, we would also relinquish data security.”

Östberg explained that MB.OS is not a basic operating system but a fully comprehensive one that enables two-way communication with customers. It will win back time for customers and offer the highest level of driving confidence, with automated driving to ease their lives. With the most seamless and intuitive user experience, it will create continuous excitement through new functions, services, and third-party apps.

“With MB.OS, there will always be something to look forward to because we’ll constantly be integrating further exciting features into our offerings with over-the-air updates,” he explained.


TuSimple’s & NVIDIA’s Path to Driver-Out Autonomous Trucking

In December 2021, TuSimple completed its first autonomous driver-out run. It was the first time a heavy-duty commercial truck drove on public streets and highways from one hub to another at speeds of up to 65 miles per hour. The nearly two-hour trip was completed without a human in the cabin.

Driving a truck is different from driving a car. The size, the turning radius, and the braking distance are all different. However, in the context of driver-out operation, the single biggest difference is the speed of driving, explained Xiaodi Hou, TuSimple’s co-founder and CTO.

TuSimple completed the drive with a survey vehicle driving between five and six miles ahead to check for dangerous scenarios. A chase van with a backup driver and the backup test engineers always stayed behind the autonomous truck in the same lane, keeping itself in the blind zone of the trailer.

Unmarked law enforcement vehicles monitored the drive as required by the state of Arizona.

[Image: TuSimple’s autonomous truck]

The Swiss Cheese Approach to Safety

The safety of the system is achieved by multiple levels of redundancy and the appropriate trade-off between availability and reliability, said Hou.

“The guiding principle is to use multiple layers of safety precautions across the software and hardware, all the way up to the system level. This is often referred to as the Swiss cheese model. Like Swiss cheese, a single layer of the system is full of holes. Nevertheless, once the layers are stacked on top of each other, the system as a whole is far less likely to be penetrated. A good practice is to add more layers of defence during the driver-out program. We are refactoring our algorithm architecture to implement this principle,” he explained.
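
To make the Swiss cheese intuition concrete: if the layers fail independently, the probability that a hazard slips through all of them is the product of the per-layer failure probabilities. A toy calculation with made-up numbers (these are not TuSimple’s figures):

```python
from math import prod

# Hypothetical, independent per-layer failure probabilities: the chance
# that each safety layer misses a given hazard (a "hole in the cheese").
layers = {
    "perception": 1e-2,
    "planning": 1e-2,
    "redundant_sensing": 1e-3,
    "system_health_monitor": 1e-3,
}

# A hazard causes harm only if it passes through every layer's hole at once.
p_all_fail = prod(layers.values())
print(f"P(all layers fail) = {p_all_fail:.0e}")  # 1e-10
```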

The gist, according to Hou, is not to achieve absolute safety but to make the failure probability significantly lower than that of a human-driven vehicle. To do this, the company uses simulation, which is powerful enough to generate scenarios that have never occurred on the road.

“By feeding the algorithm pipeline with previously recorded data, we can evaluate the system behaviours without sending the truck out to the public road,” said Hou.
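
In broad strokes, such log replay re-runs the driving stack offline on recorded frames and measures how often its decisions diverge from what the truck actually did. A minimal sketch under that assumption; the types, names, and threshold below are hypothetical, not TuSimple’s pipeline:

```python
# Hypothetical log-replay harness for offline evaluation.
from dataclasses import dataclass

@dataclass
class Frame:
    sensor_snapshot: dict      # recorded sensor data for this timestep
    logged_lateral_m: float    # lateral offset the truck actually held

def replay_divergence(frames, planner, tolerance_m=0.5):
    """Fraction of frames where the offline plan diverges from the log."""
    diverged = sum(
        abs(planner(f.sensor_snapshot) - f.logged_lateral_m) > tolerance_m
        for f in frames
    )
    return diverged / len(frames)

# Trivial stand-in planner that always holds the lane centre (offset 0.0).
frames = [Frame({"speed_mps": 29.0}, 0.1), Frame({"speed_mps": 28.5}, 0.9)]
print(replay_divergence(frames, planner=lambda snapshot: 0.0))  # 0.5
```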

He added that replay and simulation are only part of the validation process. The system also has to be validated through private track tests for emergency manoeuvres and control stability, and it has to go through 16,000 miles of road testing after the final code freeze.

“We still have an obvious gap to close before the commercialisation of autonomous trucking,” added Hou.

“I believe that the inflection point is around $1 per mile for autonomy. According to the American Transportation Research Institute, the per-mile cost of a human driver in 2020 was 74 cents. This number has since gone up due to increased driver and fuel costs. If an autonomous truck can’t beat this number, it won’t make any business sense.”

Right now, TuSimple is in the early stages of a multi-year efficiency optimisation effort. Today, the company is spending more than $1 per mile on hardware maintenance alone. However, according to Hou, this cost will gradually decrease as reliability improves.
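
The break-even logic behind that target is simple: autonomy wins once its all-in per-mile cost drops below the human-driver benchmark. A toy comparison; only the 74-cent figure comes from the article, and the autonomy costs below are hypothetical:

```python
# Toy break-even check against the human-driver benchmark Hou cited.
HUMAN_COST_PER_MILE = 0.74  # ATRI's 2020 figure, in USD (per the article)

autonomy_cost_per_mile = {2022: 1.20, 2024: 0.90, 2026: 0.60}  # hypothetical

for year, cost in autonomy_cost_per_mile.items():
    verdict = "beats" if cost < HUMAN_COST_PER_MILE else "trails"
    print(f"{year}: ${cost:.2f}/mile {verdict} the human benchmark")
```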

“In a game where the winner takes all, we needed a more concrete plan. Earlier this year, TuSimple and NVIDIA announced the co-development of the autonomous domain controller, or the ADC, based on NVIDIA Drive Orin,” he explained.


Dancing Data Centre and Processor Chips

The GTC opening keynote ended with supercomputers dancing to the classic swing song, “Sing, Sing, Sing!”.

The cooling fans slid out like trombones while drawers popped out to the beat of the music. NVIDIA Omniverse animated the avatars and data centre parts to dance. Processors spun around in geometric formations.

Avatar heads bobbled. The Toy Jensen avatar opened the door to investigate what all the commotion was about. Then the dancing stopped. When Toy Jensen closed the door, the computer parts danced on with flourishes of streaming illuminated confetti for a grand finale.

